Appendix

Neural Information Processing Systems

While creating a system that makes explicit use of a protected attribute when making decisions demonstrates intent, it is not the only way to do so. In particular, as it is difficult to explicitly demonstrate intent when someone is either unable or unwilling to explain honestly why they made decisions, the courts recognize indirect evidence of the form: ". . .



Appendix A Legal Implications of our Analysis

Neural Information Processing Systems

What is less straightforward is how the existing methods, which we have shown to exhibit the same systematic behavior as our new approach, relate to this analysis.

Overview. Our argument can be decomposed into three parts. We address each point in detail below: 1.



Nichelle and Nancy: The Influence of Demographic Attributes and Tokenization Length on First Name Biases

An, Haozhe, Rudinger, Rachel

arXiv.org Artificial Intelligence

Through the use of first name substitution experiments, prior research has demonstrated the tendency of social commonsense reasoning models to systematically exhibit social biases along the dimensions of race, ethnicity, and gender (An et al., 2023). Demographic attributes of first names, however, are strongly correlated with corpus frequency and tokenization length, which may influence model behavior independent of or in addition to demographic factors. In this paper, we conduct a new series of first name substitution experiments that measure the influence of each of these factors while controlling for the others. We find that demographic attributes of a name (race, ethnicity, and gender) and name tokenization length are both factors that systematically affect the behavior of social commonsense reasoning models.
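The basic shape of a first name substitution experiment can be sketched as follows. This is a minimal illustration, not the paper's protocol: the templates and name lists are invented, and `model_score` is a deterministic placeholder (sentence length) standing in for a real model's output so the pipeline runs end to end.

```python
from statistics import mean

# Hypothetical templates; a real study draws these from a benchmark.
TEMPLATES = [
    "{name} forgot to return the library book.",
    "{name} helped a neighbor carry the groceries.",
]

# Hypothetical name lists; a real study uses demographically
# annotated name sets, controlled for frequency and token length.
NAMES = {
    "group_a": ["Nichelle", "Latoya"],
    "group_b": ["Nancy", "Emily"],
}

def model_score(sentence: str) -> float:
    """Placeholder scorer: sentence length scaled to [0, ~1].

    Substitute a trained model's logit or probability in practice.
    """
    return len(sentence) / 100.0

def group_means(names_by_group, templates):
    """Average the score over every (name, template) pair per group."""
    return {
        group: mean(
            model_score(t.format(name=n)) for n in names for t in templates
        )
        for group, names in names_by_group.items()
    }

scores = group_means(NAMES, TEMPLATES)
gap = abs(scores["group_a"] - scores["group_b"])
```

Note that with this placeholder scorer the entire "bias" is driven by name length, which is exactly the confound the paper controls for: a nonzero gap here says nothing about demographics.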


How Do You Define Unfair Bias in AI?

G.R. Jenkin & Associates

#artificialintelligence

Art is subjective and everyone has their own opinion about it. When I saw the expressionist painting Blue Poles, by Jackson Pollock, I was reminded of the famous quote by Rudyard Kipling, "It's clever, but is it Art?" Pollock's piece looks like paint messily spilled onto a drop sheet protecting the floor. The debate over what constitutes art has a long history that will probably never be settled; there is no definitive definition of art. Nor is there a broadly accepted objective measure of a piece of art's quality, the closest being Orson Welles's "I don't know anything about art but I know what I like." Similarly, people recognize unfair bias when they see it, but it is quite difficult to create a single objective definition.


From Discrimination in Machine Learning to Discrimination in Law, Part 1: Disparate Treatment

#artificialintelligence

Around 60 years ago, the U.S. Department of Justice Civil Rights Division was established to prohibit discrimination based on protected attributes. Over these 60 years, it has established a set of policies and guidelines to identify and penalize those who discriminate. The widespread use of machine learning (ML) models in routine life has prompted researchers to begin studying the extent to which these models are discriminatory. However, some researchers are unaware that the legal system already has well-established procedures for describing and proving discrimination in law. In this series of blog posts, we'll try to bridge this gap.


Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks

Lohaus, Michael, Kleindessner, Matthäus, Kenthapadi, Krishnaram, Locatello, Francesco, Russell, Chris

arXiv.org Artificial Intelligence

We show that deep networks trained to satisfy demographic parity often do so through a form of race or gender awareness, and that the more we force a network to be fair, the more accurately we can recover race or gender from the internal state of the network. Based on this observation, we investigate an alternative fairness approach: we add a second classification head to the network to explicitly predict the protected attribute (such as race or gender) alongside the original task. After training the two-headed network, we enforce demographic parity by merging the two heads, creating a network with the same architecture as the original network. We establish a close relationship between existing approaches and our approach by showing (1) that the decisions of a fair classifier are well-approximated by our approach, and (2) that an unfair and optimally accurate classifier can be recovered from a fair classifier and our second head predicting the protected attribute. We use our explicit formulation to argue that the existing fairness approaches, just as ours, demonstrate disparate treatment and that they are likely to be unlawful in a wide range of scenarios under US law.
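The head-merging idea described above can be illustrated in the simplest possible setting. The sketch below is not the paper's implementation: it uses toy scores in place of a trained network, treats both "heads" as plain score lists, and combines them linearly with a mixing weight chosen by grid search so that demographic parity (equal positive rates across the two groups) holds. All names and values here are illustrative.

```python
def positive_rate(scores, threshold=0.0):
    """Fraction of examples scored above the decision threshold."""
    return sum(s > threshold for s in scores) / len(scores)

def parity_gap(scores, groups, threshold=0.0):
    """Absolute difference in positive rates between the two groups."""
    g0 = [s for s, g in zip(scores, groups) if g == 0]
    g1 = [s for s, g in zip(scores, groups) if g == 1]
    return abs(positive_rate(g0, threshold) - positive_rate(g1, threshold))

def merged_scores(task_scores, attr_scores, alpha):
    """Combine the two heads: task score plus alpha times attribute score."""
    return [t + alpha * a for t, a in zip(task_scores, attr_scores)]

# Toy data: the raw task head favors group 1 (parity gap of 1.0).
task_scores = [0.9, 0.8, 0.7, -0.2, -0.3, 0.6]
attr_scores = [1.0, 1.0, 1.0, -1.0, -1.0, 1.0]  # second head predicts group
groups      = [1,   1,   1,   0,    0,    1]

# Grid-search the mixing weight that minimizes the parity gap.
best_alpha = min(
    (a / 10 for a in range(-20, 21)),
    key=lambda a: parity_gap(merged_scores(task_scores, attr_scores, a), groups),
)
gap = parity_gap(merged_scores(task_scores, attr_scores, best_alpha), groups)
```

The salient point, mirroring the paper's argument, is that `best_alpha` comes out negative: parity is achieved precisely by subtracting the protected-attribute head's signal from the task score, i.e. by making explicit use of group membership.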



AI in hiring might do more harm than good

#artificialintelligence

The use of artificial intelligence in the hiring process has increased in recent years, with companies turning to automated assessments, digital interviews, and data analytics to parse through resumes and screen candidates. But as IT strives for better diversity, equity, and inclusion (DEI), it turns out AI can do more harm than good if companies aren't strategic and thoughtful about how they implement the technology. "The bias usually comes from the data. If you don't have a representative data set, or any number of characteristics that you decide on, then of course you're not going to be properly finding and evaluating applicants," says Jelena Kovačević, IEEE Fellow, William R. Berkley Professor, and Dean of the NYU Tandon School of Engineering. The chief issue with AI's use in hiring is that, in an industry that has been predominantly male and white for decades, the historical data on which AI hiring systems are built will ultimately have an inherent bias.